Running Head: MELODIC CUES TO METER

The Role of Melodic and Temporal Cues in Perceiving Musical Meter

Authors

  • Erin E. Hannon
  • Joel S. Snyder
  • Tuomas Eerola
Abstract

A number of different cues allow listeners to perceive musical meter. Three experiments examined effects of melodic and temporal accents on perceived meter in excerpts from folksongs scored in 6/8 or 3/4 meter. Participants matched excerpts with one of two metrical drum accompaniments. Melodic accents included contour change, melodic leaps, registral extreme, melodic repetition, and harmonic rhythm. Two experiments with isochronous melodies showed that contour change and repetition predicted judgments. For the longer melodies of the second experiment, the variables predicted judgments best at the beginnings of excerpts. The final experiment, with rhythmically varied melodies, showed that temporal accents, tempo, and contour change were the strongest predictors of meter. Our findings suggest that listeners combine multiple melodic and temporal features to perceive musical meter.

Music is an important stimulus for rhythmic movement, such as tapping, bouncing, swaying, clapping, and dancing. Most psychological and music-theoretic accounts define meter as a perceptual and conceptual organization of periodically alternating strong and weak beats that is superimposed on the musical surface (Dowling & Harwood, 1986; Lerdahl & Jackendoff, 1983; Povel & Essens, 1985). This metrical structure enables individuals in a group to move in synchrony with music and with each other. Meter is also thought to guide attention in a dynamic fashion, enhancing anticipation for when events are likely to occur (Jones, 1987). Aside from what is prescribed by the notated score, however, meter does not exist primarily as a physical parameter of the music, but as an abstraction in the mind of the listener.
As in the case of other cognitive structures such as linguistic grammar, significant questions remain about how we come to perceive meter in the course of development or while listening to a particular piece of music for the first time. Certainly, meter perception is constrained by basic aspects of behavior, such as superior time discrimination and production for intervals from 300 to 1200 ms (Engström, Kelso, & Holroyd, 1996; Fraisse, 1978; Friberg & Sundberg, 1995; Mates, Radil, Müller, & Pöppel, 1994; Peters, 1989). Beyond this, however, it is still unclear how the rich surface structure of music, with simultaneously unfolding patterns of pitch, duration, amplitude, spectrum, and expressive timing, gives rise to the perception of a simple periodic pattern of alternating strong and weak beats. The goal of the present study is to relate listeners' perceptions of metrical structure to various accent types that arise out of the melodic and temporal patterns in short melodies.

Meter perception is an instance of the general ability to use redundant and probabilistic information from multiple sources to organize and parse complex patterns in the environment. This strategy is evident in speech segmentation, where adults (McQueen, 1998; van der Lugt, 2001) and infants (Mattys, Jusczyk, Luce, & Morgan, 1999) integrate multiple cues, such as sequential, phonotactic, or prosodic information, to locate word boundaries. In isolation, these cues only partially or inconsistently predict word segmentation, but in combination they predict word boundaries reliably (Christiansen, Allen, & Seidenberg, 1998). Because complex sound patterns vary along many dimensions, and because listeners are able to combine multiple sources of perceptual information, a similar strategy is likely to be used in music.
Even when individual accent types have a relatively weak impact on listeners' perceptions of metrical structure in music, the simultaneous presentation of multiple congruent accent types may create an emergent perceptual structure that affords dancing, tapping, singing, and many other forms of musical participation. A periodic accent pattern is initially necessary for the listener to perceive the meter. The term accent is defined as an increase in the perceptual salience of a musical event that results when that event differs in some way from surrounding events (Benjamin, 1984; Cambouropoulos, 1997; Cooper & Meyer, 1960). This broad definition encompasses a wide range of phenomena, but many studies, described below, have emphasized temporal accents, such as duration change, grouping position, and event onset, or dynamic accents, which result from intensity change. Accents arising from surface features should be distinguished from the alternating strong and weak beats felt once a metrical structure has been inferred. Perseveration or reinterpretation of an initially perceived metrical structure depends on a dynamic interplay between the metrical framework and the surface accents.

The literature on meter perception has tended to focus on temporal and dynamic accents. Various models have used these kinds of accents to recover the scored meter in musical pieces and to predict synchronization behavior. Both symbolic and autocorrelation-based computer models have used event onsets, inter-onset interval (IOI) durations, and rhythmic repetition to successfully recover the scored meter in a number of musical works (Brown, 1993; Longuet-Higgins & Lee, 1982). Self-sustained oscillator models have used event onsets and intensity to predict tapping behavior (Large, 2000; Large & Kolen, 1994; Toiviainen, 1998). Other research has focused on temporal grouping accents.
A temporal grouping accent arises when a tone is relatively isolated, is the second of a two-tone cluster, or is the initial or final tone of a cluster of three or more (Povel & Essens, 1985; Povel & Okkerman, 1981). The degree to which temporal grouping accents occurred at regular temporal intervals determined rhythm reproduction accuracy (Povel & Essens, 1985) and gap detection in rhythms (Hébert & Cuddy, 2002), presumably because the regular placement of temporal grouping accents induced a metrical pulse.

Temporal and dynamic information plays an important role in how meter is conveyed in expressive musical performance. Pianists tend to emphasize events at strong metrical positions by making them longer, louder, or more legato (Drake & Palmer, 1993; Shaffer, 1981; Sloboda, 1983). These meter-related variations can aid listeners in perceiving the meter of an otherwise ambiguous musical excerpt (Palmer, Jungers, & Jusczyk, 2001; Sloboda, 1983). Performers also emphasize rhythmic grouping accents by increasing the intensity or duration of group-final events (Drake, 1993; Drake & Palmer, 1993; Repp, Windsor, & Desain, 2002). Intensity accents can disambiguate the meter even in isochronous tone sequences (Windsor, 1993), or alter the perception of where a repeating pattern begins and ends (Zimba & Robin, 1998).

Temporal and dynamic accents are thus important cues to meter, but pitch patterns also create points of relative salience, termed pitch accents. Pitch accent is divided into two categories: interval accent and contour accent (Jones, 1993). Interval accents are created when a pitch is substantially higher or lower than the pitches preceding or following it. One example, sometimes called a registral extreme accent, is a note that is particularly high or low in pitch relative to surrounding pitches (Creston, 1961; Huron & Royal, 1996; Lerdahl & Jackendoff, 1983).
A second type of interval accent results when a tone is preceded by a melodic leap that is larger than surrounding melodic intervals (Graybill, 1989; Lerdahl & Jackendoff, 1983). A contour accent results from a change in the direction of melodic contour. These contour pivot points are thought to gain salience from their position at points of melodic change (Graybill, 1989; Huron & Royal, 1996; Jones, 1993). Interval and contour accents often overlap, since registral extremes almost always coincide with contour pivot points. Figure 1 provides examples of registral extreme, interval size, and contour pivot point accents.

Contour pivot points can systematically affect listeners' judgments of metrical regularity in isochronous patterns. Thomassen (1982) created a metrical context by placing dynamic accents at periodic positions in an isochronous monotonic sequence. He then removed the dynamic accents, asking listeners to rate the metrical regularity of various 3-note melodic patterns within the established metrical context. In general, listeners gave the highest regularity ratings to contour pivot points occurring at strong metrical positions, but this depended on whether the pivot was upward or downward, and on whether the subsequent pitch was repeated. Listeners' judgments were used to develop a predictive model of melodic accent based on the set of all possible three-note contour configurations. The predictions of Thomassen's model were found to correlate moderately with metrical position in musical scores (Huron & Royal, 1996). A noted disadvantage of the Thomassen model is that it codes only local contour accent, ignoring interval size and global melodic shape, aspects of contour that could be important for accent (Huron & Royal, 1996). Music theorists have attempted to include global melodic shape in contour analysis (Marvin & Laprade, 1987; Morris, 1993).
In such 'combinatorial' models, each note of a pitch pattern is considered in relation to the entire pitch pattern (Quinn, 1999). For example, a later pivot point that is higher in pitch can override an earlier local pivot point. To quantify hierarchic contour relations, Morris (1993) created a contour reduction algorithm that 'prunes' a complex melody down to its most salient pitches. Such an algorithm can be adapted for the purpose of studying contour accent by assigning a contour depth value to each note of a melody. Figure 2 provides an example of contour reduction for one melody.

Despite efforts to quantify and predict pitch accent, many studies have failed to document reliable and consistent effects, especially in the presence of other accent types. In an early experiment, Woodrow (1911) reported that increases in loudness and duration, but not deviations in pitch, predicted the perceived starting position of a temporal group. When listeners were asked to tap to pitch-varied and monotonic versions of ragtime piano music, most tapping performance indexes did not differ between the two versions (Snyder & Krumhansl, 2001). Similarly, listeners' metrical stability ratings for melodies interrupted at various stopping positions revealed higher perceived stability following temporal accents but not following melodic leaps or contour pivot point accents (Bigand, 1997). Even in expressive performance, pitch accents tend to be inconsistent and context-dependent (Drake & Palmer, 1993). Pianists emphasized pitch accents by altering loudness, inter-onset timing, and articulation, but this depended on the type of melody and the type of pitch accent. For example, pianists increased the loudness of contour pivot points in isochronous melodies but not in rhythmically varied melodies, and melodic leaps were given dynamic stress only in a piece by Beethoven.
Perception and production of pitch accents were thus highly dependent on the musical context and the presence or absence of temporal accents. In contrast, other evidence suggests that pitch accent can alter how temporal and dynamic information is perceived. For example, listeners were unable to detect deviations from isochrony that occurred between two notes separated by a relatively large pitch interval (Drake, 1993). Listeners were also less sensitive to a decrease in intensity that coincided with a large melodic leap (Tekman, 1997, 1998). In addition, melodic accents may adversely affect tapping and reproduction when they conflict with temporal accents. Tapping variability was greater for patterns with conflicting pitch and temporal accents than for patterns in which they coincided (Jones & Pfordresher, 1997). Likewise, melodies containing concordant pitch and temporal accents were better remembered and reproduced than melodies containing conflicting pitch and temporal accents (Drake, Dowling, & Palmer, 1991; Monahan, Kendall, & Carterette, 1987). Thus, melodic and non-melodic accents appear to interact.

Another type of melodic cue to meter is repetition, or parallelism (Temperley, 2001). A note at the beginning of a repeated melodic pattern may be accented because it marks a periodic melodic structure (Lerdahl & Jackendoff, 1983; Steedman, 1977). Some models have used rhythmic and melodic repetition to recover the scored meter in musical works (Steedman, 1977; Temperley & Bartlette, 2002). Autocorrelation of the pitch time series, another index of melodic repetition, predicted the predominant period of synchronized tapping to music (Vos, van Dijk, & Schomaker, 1994). A second type of repetition accent may result when a note is immediately repeated, creating the impression of a longer duration (Creston, 1961). Figure 3 provides examples of pattern repetition and note repetition accents.
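The autocorrelation idea can be illustrated with a short sketch: correlating a pitch sequence with a lagged copy of itself peaks at the pattern's repetition period. This is a minimal illustration of the general technique, not the specific analysis used by Vos, van Dijk, and Schomaker (1994); the melody and function name are our own.

```python
def pitch_autocorrelation(pitches, lag):
    """Normalized autocorrelation of a pitch sequence at a given lag.
    A high value means the melody resembles itself shifted by `lag`
    events, so a peak marks a candidate repetition period."""
    n = len(pitches)
    mean = sum(pitches) / n
    dev = [p - mean for p in pitches]
    var = sum(d * d for d in dev)
    if var == 0:
        return 1.0  # a constant sequence trivially repeats at every lag
    return sum(dev[i] * dev[i + lag] for i in range(n - lag)) / var

# A melody (MIDI note numbers) whose pitch pattern repeats every 3 events:
melody = [60, 64, 67, 60, 64, 67, 60, 64, 67]
best_lag = max(range(1, 5), key=lambda k: pitch_autocorrelation(melody, k))
# best_lag is 3, matching the period of the repeating pattern
```

A listener (or model) that tracks such a periodicity has a direct candidate for the metrical period, which is why melodic repetition can serve as a cue to meter even without temporal accents.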
Accents might also arise from a change in harmony, called harmonic rhythm (Dawe, Platt, & Racine, 1993, 1994, 1995; Smith & Cuddy, 1989; Temperley, 2001). When listeners were asked to name the meter of melodies with triadic accompaniments, they perceived the points of harmonic change as metrical downbeats despite conflicting temporal accents (Dawe et al., 1993, 1994, 1995). Unaccompanied melodies can imply harmony sequentially, and this implied harmony could affect how listeners perceive meter. For example, listeners were slightly faster at detecting pitch deviants in unaccompanied melodies when implied harmonic changes were aligned with strong metrical positions, but only in some meters (Smith & Cuddy, 1989). Some evidence suggests that harmonically prominent pitches are more likely to occur at strong metrical positions (Järvinen & Toiviainen, 2000). It is unclear, however, whether harmony and meter have interactive or independent effects on perception. One study obtained stability ratings for otherwise identical melodic excerpts taken from differing metrical or harmonic contexts, and found that meter and harmony interacted for musicians (Bigand, 1993). A later study, however, did not replicate this interaction with a new set of stimulus materials (Bigand, 1997). Even if harmony and rhythm have independent effects, harmonic prominence may alter the salience of events and thus influence metrical processing. For example, nonadjacent notes that outline important harmonic units, such as triads, may be particularly salient and thus accented.

To summarize the literature on pitch accents, contour pivot points and melodic leaps appear to be relatively unreliable and context-dependent cues to meter. Pitch accents have been shown to affect expressive performance and perception of isochronous patterns, but the addition of temporal accents can override or obscure effects of pitch accent.
Melodic structure may nevertheless have important effects on perception of meter when temporal information is lacking or ambiguous. Thus, to understand the role of pitch information in perceiving meter, it is necessary to manipulate the presence or absence of temporal cues. In addition, studies that have examined pitch accent have typically been limited to a single type of contour or interval size accent, despite the fact that music contains a diversity of melodic features that may contribute to meter perception. These features include more complex aspects of contour accent that incorporate global melodic features, repetition of melodic patterns, repetition of individual notes, implied harmonic change, and triadic outlining at downbeat positions. The combination of different accent types may provide stronger metrical information than any one melodic accent type alone.

To test these ideas, the present experiments exploited the potential ambiguity of two familiar Western meters, simple triple (3/4) meter and compound duple (6/8) meter (Creston, 1961). In conventional metrical notation for a simple meter, the denominator specifies the relative duration of the beat, while the numerator specifies the number of beats per measure. Thus, 3/4 meter has three beats per measure, each a quarter-note (or a binary subdivision into two eighth-notes) in duration. In conventional notation for compound meter, the denominator specifies the duration of the beat's subdivision, while the numerator specifies the total number of such subdivision units per measure. The number of beats in a ternary meter can be determined by dividing the numerator by three. Thus, 6/8 meter has six eighth-notes and two beats per measure. A measure of six eighth-notes is metrically ambiguous because it can be interpreted as three beats consisting of two eighth-notes (3/4), or two beats consisting of three eighth-notes (6/8).
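This ambiguity can be stated concretely: within a six-eighth-note measure, an accent on position 3 or 5 supports grouping the eighths in twos (3/4), while an accent on position 4 supports grouping them in threes (6/8). A minimal sketch of that disambiguation logic (illustrative only, not the authors' scoring code):

```python
def implied_meter(accented_positions):
    """Infer the meter supported by accented eighth-note positions (1-6)
    within a six-eighth-note measure. Positions 3 and 5 mark three groups
    of two eighths (3/4); position 4 marks two groups of three (6/8)."""
    supports_3_4 = bool({3, 5} & set(accented_positions))
    supports_6_8 = 4 in accented_positions
    if supports_3_4 and not supports_6_8:
        return "3/4"
    if supports_6_8 and not supports_3_4:
        return "6/8"
    return "ambiguous"

assert implied_meter({1, 3, 5}) == "3/4"  # beats on 1, 3, 5: three duple beats
assert implied_meter({1, 4}) == "6/8"     # beats on 1 and 4: two triple beats
assert implied_meter({1}) == "ambiguous"  # a downbeat alone decides nothing
```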
In the absence of performance cues such as variations in dynamics and note duration, this pattern can only be disambiguated by the presence or absence of accents at note positions within the measure. Specifically, accents on eighth-note positions 3 or 5 would reinforce three units of two, while an accent on position 4 would reinforce two units of three. The three experiments used folk melodies scored in 3/4 and 6/8 meter. Participants were asked to judge which of two drum accompaniments, one in 3/4 and one in 6/8, best matched each melody. The first two experiments used isochronous sequences in which all tones in the melody were of the same duration. The third experiment used melodies containing variations in the notated duration. All sequences were played in a mechanically regular fashion, so that tone durations corresponded exactly to the notation. The melodies were analyzed for their melodic features, and predictor variables for each type of melodic accent were compared with listeners' judgments of the perceived meter.

Experiment 1

The first experiment used a set of isochronous, 7-note folk melodies that could be interpreted in either a triple (3/4) or a duple (6/8) meter. Melodies were presented at two different tempos because of the tendency to prefer inter-beat intervals in the range of 600 ms (Fraisse, 1978). Listeners controlled the presentation of two drum accompaniments corresponding to the two different meters. For each melody, listeners indicated the perceived meter along a scale labeled from 'strongly 6/8' at one end to 'strongly 3/4' at the other, with intermediate positions indicating more ambiguous perceptions of meter. Eleven predictor variables, derived from the empirical and theoretical literature reviewed above, coded different features of the stimuli. These variables were then used to predict responses.

Method

Participants.
Twenty-three members (14 female, 9 male, ages 17-39) of the Cornell University community participated in the experiment. Some received course credit and some received $5 for one hour of participation. The average amount of formal musical training was 7 years (range 0 to 25 years). None of the participants reported having perfect pitch or hearing problems.

Materials. Sixty-four short, isochronous excerpts were selected from the beginnings of European folksongs in the Essen Folksong Collection, a database of approximately 6,000 melodies collected and transcribed to music notation from 1982 to 1994 under the supervision of Helmut Schaffrath (Schaffrath & Huron, 1995). Half of the excerpts were scored in 3/4 meter, and half in 6/8. Thus, excerpts could be perceived as either three beats subdivided into two eighth-notes (3/4) or two beats subdivided into three eighth-notes (6/8). Each excerpt consisted of one measure of six eighth-notes plus the first note of the following measure (see Figure 4). For each trial, the 7-note excerpt was repeated for 6 cycles, with a 5-note silence between repetitions. Practice trials consisted of three additional 7-note excerpts (also taken from the Essen database). All excerpts were presented at a fast tempo of 200 ms per event and a slow tempo of 300 ms per event. The combination of tempo and meter resulted in potential inter-beat intervals of 400 ms, 600 ms, or 900 ms. Figure 4 illustrates the four possible combinations of meter and tempo. Two percussive drum accompaniments were created for each tempo, corresponding to 3/4 and 6/8. The accompaniment in 3/4 presented a drumbeat on eighth-note positions 1, 3, and 5 of each measure (including silent sections), while the accompaniment in 6/8 presented a drumbeat on positions 1 and 4.
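The stimulus arithmetic above can be verified in a few lines: with eighth-note inter-onset intervals of 200 ms (fast) and 300 ms (slow), grouping the eighths in twos (3/4) or threes (6/8) yields exactly the cited inter-beat intervals of 400, 600, and 900 ms. A sketch for checking the design (the experiment itself ran in MAX, not Python):

```python
# Eighth-note inter-onset intervals (ms) for the two tempos:
TEMPO_IOI = {"fast": 200, "slow": 300}

# Drumbeat positions within each six-eighth-note measure:
DRUM_POSITIONS = {"3/4": (1, 3, 5), "6/8": (1, 4)}

def inter_beat_interval(tempo, meter):
    """Inter-beat interval implied by a tempo/meter pairing: 3/4 groups
    the six eighth-notes in twos, 6/8 groups them in threes."""
    group_size = 2 if meter == "3/4" else 3
    return group_size * TEMPO_IOI[tempo]

# The four combinations yield inter-beat intervals of 400, 600, or 900 ms;
# fast 6/8 and slow 3/4 share the preferred 600 ms interval.
ibis = {(t, m): inter_beat_interval(t, m)
        for t in TEMPO_IOI for m in DRUM_POSITIONS}
```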
Three buttons on the interface, labeled 'Triple', 'None', and 'Duple', controlled a gate to each drum accompaniment so that the participant could choose to listen to either of the two accompaniments or to silence. All melodies were created in MIDI format and played with a grand piano timbre. A woodblock timbre was used for the drum accompaniment. The MIDI velocity, which determines loudness, was equivalent for all notes. The silent duration between the offset of a note and the onset of the following note was always 10 ms, resulting in a legato style of articulation.

Apparatus. The music was played under the control of a Macintosh G4 computer using MAX 3.5.9-9 software. The MAX interface displayed instructions and response buttons to participants on a 17" Mitsubishi Diamond Plus 71 monitor. MAX played all stimuli by sending the MIDI information to Unity DS-1 1.2 software via Open Music System (OMS). Unity converts MIDI information into digital audio. The output of Unity was then amplified by a Yamaha 1204 MC Series mixing console and presented to participants over AKG K141 headphones.

Procedure. Participants were asked to compare each melody with both the 3/4 and the 6/8 drum accompaniments. Participants could start, stop, or change the drum accompaniments as frequently as they liked throughout the trial, but they were asked to try each accompaniment at least once. Because each melody was presented twice, the trial duration was limited to six cycles to prevent participants from memorizing metrical interpretations for individual melodies. This trial duration was sufficient for participants to try each drum accompaniment two or three times. Because the stimuli were ambiguous, we assessed listeners' perceptions of meter using a 6-point scale labeled: strongly 6/8, moderately 6/8, weakly 6/8, weakly 3/4, moderately 3/4, or strongly 3/4.
Participants were told to indicate, at the end of the trial, the point along the scale that corresponded to how they perceived the meter of the melody. The experiment was divided into four blocks of 32 trials preceded by a practice block of 6 trials. Each of the 128 experimental stimuli (64 melodies × 2 tempi) was randomly assigned to one of four experimental blocks (A, B, C, and D), with the constraint that a melody was never repeated within a single block or in consecutive blocks. Each participant received a unique random ordering of the stimuli within each block. Consecutive trials always alternated between slow and fast tempi. Starting tempo was balanced across participants so that half started with a slow stimulus and half with a fast stimulus. Additionally, four different block orders were balanced across participants (ABCD, BCDA, CDAB, and DABC) in a Latin square design. The practice trials consisted of three additional melodies (also taken from the Essen database), each presented at the two alternating tempi. Participants were allowed to repeat the practice trials until they felt comfortable with the task, but only one did so. The experiment lasted approximately 50 min. After the experiment, all participants were given a brief questionnaire about their musical background, which included questions about formal musical training.

Variable Coding. To assess the relative importance of different types of melodic information, 11 predictor variables were created. These included six pitch accent variables, two repetition variables, two harmonic variables, and one tempo variable. A general summary of variable coding is given below, and detailed descriptions of all coding rules are presented in the Appendix. Figure 5 shows how accent variables were coded for each note of three excerpts used in the experiment.
Four variables, Thomassen Accent (TA), Local Pivot (LP), Contour Reduction (CR), and Global Pivot (GP), were created to code pitch accent arising from contour change. Thomassen accents were calculated for each note as specified in Thomassen (1982) using the Humdrum Toolkit (Huron, 1994). The local pivot variable was a simpler coding of contour change, assigning an accent to any note that was a contour pivot relative to its immediately adjacent notes. To address the possibility that global melodic information might affect perception of contour pivot points, two additional variables were created. The contour reduction variable was adapted from Morris (1993). Local pivot points in each melody were selected to create a subset of maxima and minima, from which local pivots were again selected to create a smaller subset, and so on until a final contour configuration remained. Each note was assigned a depth value corresponding to the number of times it was present at reduced hierarchical levels of the contour analysis. An example of contour reduction for one excerpt is presented in Figure 2. The global pivot variable was a simpler coding of local and global contour pivot points. This variable augmented the value of a local pivot by one point if it was the highest or lowest note in the melody.

Two pitch accent variables, Melodic Leaps (ML) and Distance from Mean Pitch (DM), quantified pitch accent resulting from interval size and registral extreme. Melodic leaps coded accent strength for notes preceded by large melodic leaps, depending on whether the size of the preceding leap was large (four or more semitones) or very large (seven or more semitones). The distance from mean pitch variable calculated each note's absolute distance from the mean pitch, divided by the melodic range of the excerpt, to create a relative measure of pitch excursion within the melody.
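The iterative pruning behind the contour reduction variable can be sketched roughly as follows: at each pass, keep only the local maxima and minima (always retaining the endpoints), and credit each note with the number of passes it survives. This is an illustrative simplification in the spirit of Morris (1993), not the authors' exact coding rules.

```python
def contour_depths(pitches):
    """Assign each note a contour depth: the number of reduction levels
    at which it survives. Interior notes survive a level only if they are
    local maxima or minima among the currently surviving notes."""
    depth = [0] * len(pitches)
    indices = list(range(len(pitches)))
    while True:
        for i in indices:
            depth[i] += 1  # every surviving note gains a level
        if len(indices) <= 2:
            break
        kept = [indices[0]]  # endpoints are always retained
        for a, b, c in zip(indices, indices[1:], indices[2:]):
            pa, pb, pc = pitches[a], pitches[b], pitches[c]
            if (pb >= pa and pb >= pc) or (pb <= pa and pb <= pc):
                kept.append(b)  # b is a local maximum or minimum
        kept.append(indices[-1])
        if kept == indices:
            break  # no note was pruned; reduction has converged
        indices = kept

    return depth

# An arch-shaped melody: the peak and the endpoints survive one more
# level than the passing notes on either side.
print(contour_depths([60, 62, 64, 65, 64, 62, 60]))  # [2, 1, 1, 2, 1, 1, 2]
```

Higher depth values would then mark the contour-salient notes whose metrical positions the CR variable compares against listeners' judgments.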
Two variables, Pattern Repetition (PR) and Note Repetition (NR), were created to predict accents resulting from melodic repetition. For pattern repetition, any note at the beginning of a repeated melodic fragment was given a value according to whether the repetition was exact or transposed, and to the number of times the fragment was repeated throughout the melody. For note repetition, notes were given points if they were immediately repeated, and incremented further when there were additional repeated notes in the excerpt. When a single note was repeated at nonadjacent positions, it was also incremented in value at each position, depending on the number of repetitions.

In order to correlate these variables with the judgments, accent values for individual positions were subsequently reduced to a single metrical prediction per excerpt. These predictions were calculated by taking the difference between the average accent value for positions 3 and 5 and the accent value for position 4, the positions in the melody crucial for distinguishing 3/4 from 6/8. Thus, if the averaged accent value for positions 3 and 5 was greater than the accent value on position 4, the prediction value was positive, indicating 3/4 meter. If the accent value on position 4 was greater than the average for positions 3 and 5, the prediction value was negative, indicating 6/8. Because listeners indicated the perceived meter of each excerpt on a scale from 1 (strongly 6/8) to 6 (strongly 3/4), a positive correlation between responses and a given variable indicated a match between that variable and listeners' perceptions.

Two harmonic variables, Triadic Outlining (TO) and Harmonic Change (HC), quantified potential effects of implied harmony. To code triadic outlining, excerpts were assigned points if they contained nonadjacent alternating notes that outlined members of a triad.
The number of points assigned depended on whether the triad was major or minor, and on whether the outlining was sequential or nonsequential. It was assumed that highly coherent triadic outlining, such as a sequentially outlined major triad, would be most salient, and the most points were assigned in this case. For harmonic change, the harmonic strength of note groups supporting 3/4 or 6/8 was coded according to whether those notes formed part or all of a triad. If two note groups in a given melody implied different harmonies, that melody was given a value based on the strength of both groups. An intermediate coding stage is illustrated in the center melody of Figure 5, showing individual note values for two different harmonic groups that imply 6/8. Coding methods for both harmonic variables ultimately specified single point values for entire excerpts rather than for individual notes, as described in the Appendix.

Finally, tempo was included as a dichotomous variable (1 = fast, 2 = slow) to control for its effect on perceived meter. We anticipated that listeners would be biased toward larger subdivisions (6/8) at the fast tempo and smaller subdivisions (3/4) at the slow tempo, because listeners tend to prefer an intermediate beat period of about 600 ms (Fraisse, 1978; Parncutt, 1994).

Results and Discussion

Participants' responses along the scale from 1 (strongly 6/8) to 6 (strongly 3/4) were used as the dependent measure of perceived meter. Mean responses for both meters and tempos are presented in Figure 6. We first assessed the effects of tempo, scored meter, and formal musical training on perceived meter. Participants were categorized into groups with high (8 or more years) or low (less than 8 years) musical training. Responses for each participant were averaged across excerpts in each meter for each tempo and submitted to a three-way mixed-design analysis of variance (ANOVA), with the variables of tempo (fast vs.
slow, within subjects), meter (3/4 vs. 6/8, within subjects), and musical training (low vs. high, between subjects). This analysis revealed a significant main effect of tempo, F(1, 21) = 7.16, p < .05, and a significant main effect of scored meter, F(1, 21) = 11.95, p < .01, but no main effect of musical training. There were no significant interactions. The lack of a main effect of musical training indicated that musicians were no better than nonmusicians at identifying the scored meter. Listeners tended to perceive slow excerpts as 3/4 and fast excerpts as 6/8. This result replicates previous findings showing that listeners prefer moderately sized inter-beat intervals of about 600 ms, perceiving smaller subdivisions at slow tempos and larger subdivisions at fast tempos (Handel & Lawson, 1983; Parncutt, 1994).

Participants were more likely to perceive an excerpt in its scored meter. However, the proportion of correct matches between perceived and scored meter was only 56% when responses on the scale were dichotomized into two metrical categories (1-3 = 6/8, 4-6 = 3/4). This relatively low proportion was not surprising given the ambiguous nature of the excerpts and the resulting difficulty of the task. We did not necessarily expect to find a match between scored and perceived meter, for two primary reasons. First, because melodies in the Essen database were transcribed, scored meter reflects the metrical interpretation of the musicologist who created the transcription. Transcribed meter may have been influenced by factors present only in the original folk materials, such as the tempo, text, or expressive features of the original recording or performance. Second, even when the metrical structure of a piece is known (as in the case of a particular type of dance, such as a jig or reel), musicians and composers may intentionally include conflicting cues to meter as a means of creating metrical ambiguity or tension, and thus provoking interest.


Publication date: 2004